By Alex Hughes

Published: Friday, 21 January 2022 at 12:00 am


A robot arm, a machine-learning algorithm and a brain-computer interface have been combined to help tetraplegic patients (those who have lost movement in both their upper and lower body) interact with the world around them. While this isn’t the first time a brain interface has been used to control a robot, the work takes the technology a step further by interpreting the patient’s brain signals without requiring any explicit commands from them.

The research was carried out at the Swiss Federal Institute of Technology Lausanne (EPFL). Professor Aude Billard, head of EPFL’s Learning Algorithms and Systems Laboratory, and José del R. Millán, formerly head of EPFL’s Brain-Machine Interface Laboratory, worked together to create a computer program that can control a robot using electrical signals from a patient’s brain.

The team employed a machine-learning algorithm to interpret signals from the patient’s brain and translate them into the articulation of a robot arm.

The patient’s brain activity was monitored by an EEG cap, which records the electrical activity of the brain from the scalp. These brain waves were then fed into a computer, where the machine-learning algorithm interpreted them. The algorithm picks out the signal the brain produces when the patient notices an error, automatically inferring when they dislike a particular action.
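In rough terms, that decoding stage behaves like a classifier trained on short EEG windows recorded after each robot action. The article does not say which algorithm EPFL used, so the sketch below is purely illustrative: it uses a simple linear discriminant from scikit-learn, and the data shapes and function names are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_error_decoder(epochs, labels):
    """Train a decoder on EEG windows time-locked to robot actions.

    epochs: array of shape (n_trials, n_channels, n_samples)
    labels: 1 if the patient perceived that action as an error, else 0
    """
    X = epochs.reshape(len(epochs), -1)   # flatten each epoch into one feature vector
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf

def detect_error(clf, epoch):
    """Return True if the decoder thinks the brain flagged the action as wrong."""
    return bool(clf.predict(epoch.reshape(1, -1))[0])
```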

In their experiments, the team used the robot arm with a glass. The arm would move towards the glass, and the patient’s brain would register whether it felt too close or too far away. The process is repeated until the robot learns the route that suits the individual’s preference: not so close as to pose a risk, but not so far away as to waste movement.
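A minimal sketch of that trial-and-error loop, reusing the decoder above, might look like the following. The candidate distances and the hypothetical run_pass_and_record_eeg helper are illustrative stand-ins; the article does not spell out EPFL’s actual trajectory-adjustment procedure.

```python
def learn_clearance(decoder, candidates_cm=(1, 2, 4, 6, 8, 10)):
    """Sweep candidate clearances past the glass and keep the tightest one the
    brain does not object to: efficient, but not risky."""
    for clearance in candidates_cm:
        epoch = run_pass_and_record_eeg(clearance)  # hypothetical helper: move the arm, record EEG
        if not detect_error(decoder, epoch):
            return clearance                        # first pass that raised no error signal
    return candidates_cm[-1]                        # fall back to the widest clearance tested
```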

“The brain signals that we are recording will never be the same. We have a variability over time and this is natural. Why? Because if I move my hand, the brain is not only focused on that, the brain is processing many other things,” said Millán. “So the fact that there is this variability means that our decoder will never be 100 per cent accurate.”

"©

However, through the machine-learning algorithm used in this research, the robot can learn to account for this variability and predict how the brain will respond in particular situations: the preferred distance when moving past a glass, for example, or, in a more practical setting, how close a tetraplegic patient in a wheelchair is willing to get to other people in the street.

Implementing the algorithm in a wheelchair is one example of where the technology could go in the future. This would give people in wheelchairs greater control over their movement, speed and general safety. The algorithm could interpret brain signals to understand a user’s speed preference, the distance they are happy to keep from obstacles and people, and even the level of risk they are willing to take in certain circumstances, for example if they’re running late or somewhere busy.
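One way to imagine those learned preferences feeding into a wheelchair controller is as a small set of limits the controller must respect. The field names and numbers below are assumptions made for illustration, not parameters from the EPFL study.

```python
from dataclasses import dataclass

@dataclass
class DrivingPreferences:
    max_speed_mps: float     # top speed the user is comfortable with
    min_clearance_m: float   # closest acceptable distance to people and obstacles
    risk_tolerance: float    # 0.0 (very cautious) to 1.0 (accepts tighter margins)

def controller_limits(prefs: DrivingPreferences, in_a_hurry: bool = False) -> dict:
    """Trade a little clearance for speed when the user is in a hurry, scaled
    by how much risk they have previously shown they will accept."""
    boost = 1.0 + 0.2 * prefs.risk_tolerance if in_a_hurry else 1.0
    return {
        "speed_limit_mps": prefs.max_speed_mps * boost,
        "clearance_limit_m": prefs.min_clearance_m / boost,
    }
```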

“It’s interesting to use this algorithm over using speech, for instance, because there are things that you cannot necessarily easily articulate,” said Billard. “A layperson may not be able to articulate that they don’t like the acceleration of a wheelchair, for example. What is it that you don’t like exactly? How does that translate into a control parameter afterwards?”

This is where the technology stands out from other disability aids available. By allowing the algorithm to read the signals from the brain, it can pick up on exact feelings that an individual could not explain themselves. However, this requires the algorithm to be consistent over time, and for its detections to be statistically significant.

Without that consistency, the algorithm could be thrown off in real-life situations. If, for example, someone driving a wheelchair through a crowd passed people having an argument, they could generate an error signal that has nothing to do with the driving experience.
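A simple way to guard against such one-off false alarms is to act on an error only when it recurs across repeated trials more often than the decoder’s known false-alarm rate. The rate and threshold below are assumptions for the sketch, not figures from the study.

```python
from scipy.stats import binomtest

def detection_is_consistent(n_errors, n_trials, false_alarm_rate=0.15, alpha=0.05):
    """Return True only if n_errors out of n_trials is unlikely to be explained
    by spurious detections alone (e.g. a reaction to an argument in the street
    rather than to the driving itself)."""
    result = binomtest(n_errors, n_trials, p=false_alarm_rate, alternative="greater")
    return result.pvalue < alpha
```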